    Indoor Localization Using Radio, Vision and Audio Sensors: Real-Life Data Validation and Discussion

    This paper investigates indoor localization methods using radio, vision, and audio sensors, respectively, in the same environment. The evaluation is based on state-of-the-art algorithms and uses a real-life dataset. More specifically, we evaluate a machine learning algorithm for radio-based localization with massive MIMO technology, an ORB-SLAM3 algorithm for vision-based localization with an RGB-D camera, and an SFS2 algorithm for audio-based localization with microphone arrays. Aspects including localization accuracy, reliability, calibration requirements, and potential system complexity are discussed to analyze the advantages and limitations of using different sensors for indoor localization tasks. The results can serve as a guideline and basis for further development of robust and high-precision multi-sensory localization systems, e.g., through sensor fusion and context- and environment-aware adaptation.
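
    The radio-based method above is only named at a high level. As a rough, hypothetical illustration of CSI-fingerprint localization with a large antenna array, the Python sketch below maps channel snapshots to positions with a k-NN regressor; the magnitude features, array size, and choice of regressor are assumptions for illustration, not the machine learning algorithm evaluated in the paper.

```python
# Hypothetical sketch of fingerprint-based radio localization: map
# massive MIMO channel snapshots to 2-D positions with k-NN. Synthetic
# random data stands in for measured CSI; the magnitude-fingerprint
# feature is an assumption, not the paper's method.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
n_train, n_antennas = 500, 100

positions = rng.uniform(0, 10, size=(n_train, 2))   # ground-truth (x, y) in metres
csi = rng.standard_normal((n_train, n_antennas)) + 1j * rng.standard_normal((n_train, n_antennas))
features = np.abs(csi)                              # per-antenna magnitude fingerprint

model = KNeighborsRegressor(n_neighbors=5).fit(features, positions)

# Localize a new snapshot by regression over its nearest fingerprints.
query = np.abs(rng.standard_normal((1, n_antennas)) + 1j * rng.standard_normal((1, n_antennas)))
print("estimated position:", model.predict(query)[0])
```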

    Enhancements of Positioning for IoT Devices

    The aim of this thesis work is to find novel methods that enhance the performance of the existing Observed Time Difference of Arrival (OTDOA) positioning technique for Internet of Things (IoT) devices, introduced in Third Generation Partnership Project (3GPP) standard release 14. NarrowBand IoT (NB-IoT) positioning is considered as the baseline. The scope includes the investigation of the positioning reference signal (PRS), including modifying or replacing the current sequence generation through mathematical derivations, and modifying the PRS configuration (e.g., increasing the density, or using a new time/frequency resource grid). Moreover, different correlator designs at the receiver side (operating in either the time or the frequency domain) have been analyzed in terms of performance and complexity. In addition, we investigate the impact of extending the PRS transmission either in the time domain, by sending multiple subframes, or in the frequency domain, by utilizing more spectrum resources (using more Physical Resource Blocks (PRBs)). Our results show that the current NB-IoT PRS resource mapping is already well designed. The newly suggested sequences and time/frequency resource grids influence the correlation properties, which in turn affect the positioning accuracy. Increasing the number of resources for PRS transmission in time or frequency improves positioning accuracy. Different PRS sequences can lead to performance improvements at the cost of implementation complexity and design flexibility. In addition, optimizing the original Gold sequence shows a consistently good performance gain.

    The Internet of Things (IoT) is a technology that allows smart devices and items to connect to the Internet. IoT is a rapidly growing topic, since it has been estimated that billions of devices will be connected to the Internet by 2025. The strong growth of the IoT market will trigger a revolution in many industries, such as healthcare, agriculture, automotive, and safety and security; thus, our daily life will be significantly affected by the development of this technology. However, IoT will allow endless connections to take place, which opens the door to many challenges, such as strict requirements on device power consumption, cost, and complexity. Due to its wide range of applications, many new standards have emerged to support IoT integration. Narrowband IoT (NB-IoT) is one of the newest cellular technologies; it connects many low-cost devices in severe coverage situations with low power consumption and long battery lifetime. In 3GPP release 14, a positioning feature was introduced for NB-IoT that allows the location of devices to be determined using the positioning reference signal (PRS) transmitted at the base-station side. This positioning technique is known as Observed Time Difference of Arrival (OTDOA), because the device position is estimated from the differences between the times of arrival of signals from multiple base stations. Nevertheless, implementing OTDOA in NB-IoT devices is quite challenging, due to the low-cost constraint and the strict power-consumption requirements. Furthermore, extreme coverage conditions are expected for NB-IoT devices, which affect received signal levels and in turn the positioning accuracy.

    In this thesis, an OTDOA positioning simulation platform has been built. Several approaches have been implemented at the transmitter side with the objective of improving positioning accuracy. The methods include generating various types of sequences, changing the original sequence mapping, and proposing new designs of the time-frequency resource grid. It is shown that the current standard is well designed and has good positioning performance. Many ideas have been implemented within the scope of this thesis, leading to adequate accuracy under certain circumstances with acceptable complexity. For future work, new sequences can be investigated further as replacements for the current standard, and advanced low-power receiver algorithms can be considered.
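
    As a minimal illustration of the OTDOA principle summarized above, the sketch below cross-correlates a received signal with each cell's known reference sequence to estimate times of arrival and then forms their differences; the random sequences, sample delays, and noise level are toy stand-ins, not the 3GPP NPRS or the thesis simulation platform.

```python
# Toy OTDOA sketch: estimate per-cell times of arrival by sliding
# cross-correlation with each cell's known reference sequence, then
# form time differences with respect to a reference cell.
import numpy as np

rng = np.random.default_rng(1)
L, n_cells = 128, 3
prs = [rng.choice([-1.0, 1.0], size=L) for _ in range(n_cells)]  # stand-in reference sequences
true_delays = [10, 37, 62]                                       # arrival times in samples

rx = np.zeros(512)
for seq, d in zip(prs, true_delays):                             # delayed copy from each cell
    rx[d:d + L] += seq
rx += 0.5 * rng.standard_normal(rx.size)                         # receiver noise

# The peak of each cross-correlation gives that cell's time of arrival.
toas = [int(np.argmax(np.correlate(rx, seq, mode="valid"))) for seq in prs]
tdoas = [t - toas[0] for t in toas]                              # differences w.r.t. cell 0
print("ToAs:", toas, "TDoAs:", tdoas)
```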

    Amplitude and Phase Estimation for Absolute Calibration of Massive MIMO Front-Ends

    Massive multiple-input multiple-output (MIMO) promises significantly higher performance relative to conventional multiuser systems. However, the promised gains of massive MIMO systems rely heavily on the accuracy of the absolute front-end calibration, as well as on the quality of the channel estimates at the base station (BS). In this paper, we analyze a user-equipment-aided calibration mechanism to estimate the amplitude scaling and phase drift at each radio-frequency chain interfacing with the BS array. Assuming a uniform linear array at the BS and Ricean fading, we obtain the estimation parameters with moment-based (amplitude and phase) and maximum-likelihood (phase-only) estimation techniques. In stark contrast to previous works, we mathematically articulate the equivalence of the two approaches for phase estimation. Furthermore, we rigorously derive a Cramér-Rao lower bound to characterize the accuracy of the two estimators. Via numerical simulations, we evaluate the estimator performance with varying dominant line-of-sight powers, dominant angles-of-arrival, and signal-to-noise ratios.
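
    As a toy illustration of a moment-based estimator of this kind, the sketch below recovers the amplitude scaling and phase drift of a single front-end chain from repeated known pilots via the sample mean. The simplified single-chain model without Ricean fading, and all parameter values, are assumptions for illustration rather than the system model of the paper.

```python
# Moment-based (sample-mean) estimate of a complex front-end gain from
# repeated known pilots: |g| gives the amplitude scaling, angle(g) the
# phase drift. All values are assumed for illustration.
import numpy as np

rng = np.random.default_rng(2)
true_gain = 0.8 * np.exp(1j * 0.6)        # unknown amplitude 0.8, phase drift 0.6 rad
pilot = 1.0 + 0.0j                        # known pilot symbol
n_pilots, snr_lin = 200, 100.0

noise = (rng.standard_normal(n_pilots) + 1j * rng.standard_normal(n_pilots)) / np.sqrt(2 * snr_lin)
rx = true_gain * pilot + noise            # repeated pilot observations

g_hat = np.mean(rx / pilot)               # first-moment estimate of the gain
print("amplitude estimate:", abs(g_hat))
print("phase estimate (rad):", np.angle(g_hat))
```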

    Deep-Learning Based Channel Estimation for OFDM Wireless Communications

    Multi-carrier techniques are a backbone of modern commercial networks. However, the performance of multi-carrier systems depends greatly on the quality of the acquired channel state information (CSI). In this paper, we propose a novel deep-learning-based processing pipeline to estimate CSI for payload time-frequency resource elements. The proposed pipeline contains two cascaded sub-blocks, namely an initial denoise network (IDN) and a resolution enhancement network (REN). In brief, the IDN applies a novel two-step denoising structure, while the REN consists purely of fully-connected layers. Compared to existing works, our proposed processing architecture is more robust in low signal-to-noise-ratio scenarios and generally delivers a significant gain.
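
    The REN is described as consisting purely of fully-connected layers; a minimal PyTorch stand-in for such a stage is sketched below. The layer widths, the real/imaginary stacking of the channel, and the single training step are assumptions for illustration, not the architecture or training setup of the paper.

```python
# Hypothetical fully-connected "resolution enhancement" stage: map
# channel estimates at pilot resource elements (real/imag stacked)
# to the full time-frequency grid. Dimensions are assumed.
import torch
import torch.nn as nn

n_pilots, n_grid = 64, 256                # pilot REs in, full-grid REs out

model = nn.Sequential(
    nn.Linear(2 * n_pilots, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, 2 * n_grid),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One toy training step on random tensors standing in for (noisy pilot
# estimates, true full-grid channel) pairs.
pilots = torch.randn(32, 2 * n_pilots)
target = torch.randn(32, 2 * n_grid)
loss = loss_fn(model(pilots), target)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print("training loss:", loss.item())
```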

    Virtual Network Embedding for Collaborative Edge Computing in Optical-Wireless Networks

    As an open integrated environment deployed with wired and wireless infrastructures, the smart city relies heavily on the wireless-optical broadband access network. Smart home data are usually sent to neighboring optical network units (ONUs) through front-end wireless mesh networks (WMNs) and finally reach the optical line terminal (OLT) for decision making via the passive optical network (PON) backhaul. To reduce the backhaul bandwidth saturated by this conventional approach, smart edge devices (EDs) should be deployed at sensors and ONUs so that collaborative edge computing can be performed in the front-end WMNs. Moreover, the cooperation of EDs at different ONUs is also promising for computing tasks that cannot be handled within the front-end WMNs due to local bottlenecks, leading to collaborative edge computing in the PON backhaul. In this paper, network virtualization is utilized to support the coordination of computing and network resources. We also describe the relationship between virtual networks and the substrate-resource requirements of computing tasks. First, a graph-cutting algorithm is employed to embed as many virtual networks as possible onto the common network infrastructure in the front-end WMNs, aiming at minimizing the total transmitting power. Next, we transform the virtual networks that cannot be embedded into new ones that are processed through the PON backhaul, where the wavelength consumption is optimized. Simulation results demonstrate that 1) the total transmitting power assigned to nodes is effectively reduced using the graph-cutting algorithm if all computing tasks can be solved by the front-end WMNs; and 2) otherwise, our method accepts more virtual networks, with an improvement ratio of 77%, through the PON backhaul. In addition, there is a good match between the algorithm result and the optimal number of consumed wavelengths per optical fiber cable.
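
    The abstract names the graph-cutting step without detailing it; purely as a generic illustration of cutting a small weighted substrate graph (with edge weights as a toy proxy for transmit-power cost), the networkx sketch below computes a global minimum cut with the Stoer-Wagner algorithm. This is not the algorithm from the paper.

```python
# Generic minimum-cut illustration on a small weighted mesh, standing in
# for the graph-cutting step named in the abstract. Node names and edge
# weights (a toy proxy for transmit-power cost) are assumed.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("sensor1", "sensor2", 3), ("sensor1", "onu1", 2),
    ("sensor2", "onu1", 4), ("sensor2", "onu2", 1),
    ("onu1", "onu2", 5),
])

# Stoer-Wagner global minimum cut: splits the graph into two parts with
# the smallest total weight of crossing edges.
cut_value, (side_a, side_b) = nx.stoer_wagner(G)
print("cut weight:", cut_value)
print("partition:", side_a, "|", side_b)
```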

    Antenna Array Configuration for Reliable Communications in Maritime Environments

    The performance and reliability of wireless communications at sea are often limited by the deep fades caused by coherent sea-surface reflection. In this paper, we show that by employing multiple antennas at the base station, the deep fades can be mitigated within a large communication range if the antennas are carefully spaced in the vertical direction. We derive a bound for the range where mitigation of deep fading is guaranteed and evaluate this bound through numerical search methods, utilizing a realistic channel model that accounts for the curvature of the Earth. The numerical results show that the proposed scheme leads to better system performance than a free-space scenario in terms of signal-to-noise ratio if four or more antennas are employed at the base station.
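
    The fade-mitigation mechanism can be illustrated with a flat-surface two-ray model (the paper's model additionally accounts for Earth curvature). In the sketch below, several vertically spaced base station antennas are compared and the strongest one is selected per range; all geometry and frequency values are assumptions for illustration, not the paper's configuration.

```python
# Two-ray sea-reflection sketch over a flat surface: the direct and
# reflected rays interfere, causing deep fades whose positions depend on
# antenna height, so selecting among vertically spaced antennas reduces
# the worst-case fade. All parameter values are assumed.
import numpy as np

c, f = 3e8, 2e9
lam = c / f
h_ship = 5.0                                  # ship antenna height (m)
h_bs = 30.0 + 2.0 * np.arange(4)              # four BS antennas, 2 m vertical spacing
d = np.linspace(500, 10000, 2000)             # horizontal range (m)

def two_ray_gain(h_tx, h_rx, d):
    d_los = np.hypot(d, h_tx - h_rx)          # direct path length
    d_ref = np.hypot(d, h_tx + h_rx)          # sea-reflected path length
    field = (np.exp(-2j * np.pi * d_los / lam) / d_los
             - np.exp(-2j * np.pi * d_ref / lam) / d_ref)  # reflection coefficient -1
    return np.abs(field) ** 2

gains = np.array([two_ray_gain(h, h_ship, d) for h in h_bs])
best = gains.max(axis=0)                      # per-range selection of strongest antenna
print("single-antenna fade span (dB):", 10 * np.log10(gains[0].min() / gains[0].max()))
print("with selection fade span (dB):", 10 * np.log10(best.min() / best.max()))
```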

    Moving Object Classification with a Sub-6 GHz Massive MIMO Array Using Real Data


    Modified Gold Sequence for Positioning Enhancement in NB-IoT

    Positioning is an essential feature in Narrow-Band Internet-of-Things (NB-IoT) systems. Observed Time Difference of Arrival is one of the supported positioning techniques for NB-IoT. It utilizes the downlink NB positioning reference signal (NPRS), which is generated from a length-31 Gold sequence. Although a Gold sequence has good auto-correlation and cross-correlation properties, the correlation properties of the NPRS in NB-IoT are still sub-optimal. This is mainly due to two facts: the number of NPRS symbols in each subframe is limited, and the sampling rate is low. In this paper, we propose to modify the NPRS generation by exploiting the cross-correlation function of the NPRS. That is, for each pair of orthogonal frequency division multiplexing (OFDM) symbols, the first NPRS symbol is generated as specified in the current standard, i.e., as a Gold sequence, while the second OFDM symbol is set to the additive inverse of the first one. Our simulation results show that the proposed NPRS sequence improves the correlation properties, particularly the cross-correlation. Furthermore, 15%-30% positioning-accuracy improvements can be attained with the proposed method, compared to the legacy one, under both Additive White Gaussian Noise and Extended-Pedestrian-A channels. The proposed NPRS sequence can also be applied to other similar systems, such as Long-Term Evolution (LTE).
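
    The proposed construction can be sketched in a few lines: generate a length-31 Gold sequence (a 36.211-style generator with two 31-stage shift registers), BPSK-map it as the first NPRS symbol, and set the second symbol to its additive inverse. The cell-ID-based initialization and symbol length below are simplified assumptions, and the printed zero-lag cross-correlation is only a sanity check, not a reproduction of the paper's results.

```python
# Sketch of the modified NPRS: first symbol from a length-31 Gold
# sequence generator, second symbol its additive inverse. The c_init
# choice and symbol length are simplified assumptions.
import numpy as np

def gold_sequence(c_init, length, nc=1600):
    x1 = np.zeros(nc + length + 31, dtype=int)
    x2 = np.zeros(nc + length + 31, dtype=int)
    x1[0] = 1                                            # fixed x1 initialization
    x2[:31] = [(c_init >> i) & 1 for i in range(31)]     # c_init seeds x2
    for n in range(nc + length):
        x1[n + 31] = (x1[n + 3] + x1[n]) % 2
        x2[n + 31] = (x2[n + 3] + x2[n + 2] + x2[n + 1] + x2[n]) % 2
    return (x1[nc:nc + length] + x2[nc:nc + length]) % 2

def nprs_pair(c_init, length=62):
    first = 1.0 - 2.0 * gold_sequence(c_init, length)    # BPSK mapping
    second = -first                                      # proposed: additive inverse
    return first, second

a1, a2 = nprs_pair(c_init=3)
b1, b2 = nprs_pair(c_init=5)

# Zero-lag cross-correlation between the two cells' two-symbol blocks.
xcorr = np.dot(np.r_[a1, a2], np.r_[b1, b2]) / (2 * len(a1))
print("normalised cross-correlation:", xcorr)
```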

    Sensing and Classification Using Massive MIMO: A Tensor Decomposition-Based Approach

    Wireless-based activity sensing has gained significant attention due to its wide range of applications. We investigate radio-based multi-class classification of human activities using massive multiple-input multiple-output (MIMO) channel measurements in line-of-sight and non-line-of-sight scenarios. We propose a tensor decomposition-based algorithm to extract features by exploiting the complex correlation characteristics across time, frequency, and space from channel tensors formed from the measurements, followed by a neural network that learns the relationship between the input features and the output target labels. Through evaluations on real measurement data, it is demonstrated that classification using a massive MIMO array achieves significantly better accuracy than the state of the art, even for a smaller experimental data set. Funding agencies: ELLIIT; Ericsson AB.
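
    The feature-extraction idea can be sketched with an off-the-shelf CP (PARAFAC) decomposition: factor a (time x frequency x antenna) channel tensor and flatten the factor matrices into a feature vector for a downstream classifier. The tensor sizes, rank, and random stand-in data below are assumptions, not the paper's measurement setup or exact algorithm.

```python
# CP/PARAFAC feature extraction from a channel tensor using tensorly;
# random data stands in for massive MIMO channel measurements.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(3)
channel_tensor = tl.tensor(rng.standard_normal((20, 50, 100)))  # time x freq x antenna

rank = 4
weights, factors = parafac(channel_tensor, rank=rank)

# Flatten the per-mode factor matrices into one feature vector that a
# small neural network could classify.
features = np.concatenate([tl.to_numpy(f).ravel() for f in factors])
print("feature vector length:", features.size)  # 4*(20 + 50 + 100)
```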